Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
We consider risk-averse learning in repeated unknown games, where the agents' goal is to minimize the risk of incurring significantly high costs. Specifically, the agents use the Conditional Value at Risk (CVaR) as a risk measure and rely on bandit feedback, in the form of the cost values of the actions selected in each episode, to estimate their CVaR values and update their actions. A major challenge in using bandit feedback to estimate CVaR is that the agents can only access their own cost values, which, however, depend on the actions of all agents. To address this challenge, we propose a new risk-averse learning algorithm that utilizes the full historical information on the cost values. We show that this algorithm achieves sub-linear regret and matches the best known algorithms in the literature. We provide numerical experiments on a Cournot game that show our approach outperforms existing methods.
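The CVaR estimation step can be made concrete with a minimal empirical estimator. Under one common convention, CVaR at level α is the mean of the worst α-fraction of observed costs; the function below is an illustrative sketch, not the paper's implementation:

```python
import numpy as np

def empirical_cvar(costs, alpha=0.05):
    """Empirical CVaR_alpha: the mean of the worst alpha-fraction of costs.

    VaR_alpha is taken as the (1 - alpha)-quantile of the cost samples;
    CVaR_alpha averages the samples at or above it.
    """
    costs = np.asarray(costs, dtype=float)
    var = np.quantile(costs, 1.0 - alpha)   # value at risk (tail threshold)
    tail = costs[costs >= var]              # the worst alpha-fraction
    return tail.mean()

# Example: estimate CVaR from repeated cost observations of one action.
rng = np.random.default_rng(0)
samples = rng.normal(loc=1.0, scale=0.5, size=10_000)
cvar = empirical_cvar(samples, alpha=0.05)  # mean of the worst 5% of costs
```

In the bandit-feedback setting, `samples` would be the agent's own observed costs, whose distribution shifts as the other agents update their actions.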
Despite their apparent ability to distinguish in-distribution samples, deep neural networks perform poorly at detecting out-of-distribution (OOD) data. To address this deficiency, state-of-the-art solutions choose to train deep networks on auxiliary datasets of outliers. Various training criteria for these auxiliary outliers have been proposed based on heuristic intuitions. However, we find that these intuitively designed outlier training criteria can hurt in-distribution learning and eventually lead to inferior performance. To this end, we identify three causes of the in-distribution incompatibility: contradictory gradients, false likelihoods, and distribution shifts. Based on this new understanding, we propose a new OOD detection method by adapting both the top design of the deep model and the loss function. Our method achieves in-distribution compatibility by reducing the interference with the probabilistic characteristics of in-distribution features. On several benchmarks, our method not only achieves state-of-the-art OOD detection performance but also improves in-distribution accuracy.
The BARN (Benchmark Autonomous Robot Navigation) Challenge took place at the 2022 IEEE International Conference on Robotics and Automation (ICRA 2022) in Philadelphia, PA. The aim of the challenge was to evaluate state-of-the-art autonomous ground navigation systems for moving robots through highly constrained environments in a safe and efficient manner. Specifically, the task was to navigate a standardized differential-drive ground robot from a predefined start location to a goal location without colliding with any obstacles, both in simulation and in the real world. Five teams from all over the world participated in the qualifying simulation competition, three of which were invited to compete against each other on a set of physical obstacle courses at the Philadelphia Convention Center. The competition results suggest that, despite appearing simple on the surface, autonomous ground navigation in highly constrained spaces is in fact far from a solved problem, even for experienced roboticists. In this paper, we discuss the challenge, the approaches used by the top three winning teams, and lessons learned to guide future research.
Accurately learning dynamics models is an important goal of model-based reinforcement learning (MBRL), yet most MBRL methods learn a dense dynamics model that is vulnerable to spurious correlations and therefore generalizes poorly to unseen states. In this paper, we introduce Causal Dynamics Learning for Task-Independent State Abstraction (CDL), which first learns a theoretically proved causal dynamics model that removes unnecessary dependencies between state variables and the action, thus generalizing well to unseen states. A state abstraction can then be derived from the learned dynamics, which not only improves sample efficiency but also applies to a wider range of tasks than existing state abstraction methods. Evaluated on two simulated environments and downstream tasks, both the dynamics model and the policies learned by the proposed method generalize well to unseen states, and the derived state abstraction improves sample efficiency compared to learning without it.
We consider online stochastic games with risk-averse agents whose goal is to learn optimal decisions that minimize the risk of incurring significantly high costs. Specifically, we use the Conditional Value at Risk (CVaR) as a risk measure that the agents can estimate using bandit feedback in the form of the cost values of only their selected actions. Since the distributions of the cost functions depend on the actions of all agents, which are generally unobservable, the distributions themselves are unknown and, therefore, the CVaR values of the costs are difficult to compute. To address this challenge, we propose a new online risk-averse learning algorithm that relies on one-point zeroth-order estimates of the CVaR gradients, computed using CVaR values that are estimated by appropriately sampling the cost functions. We show that this algorithm achieves sub-linear regret with high probability. We also propose two variants of this algorithm to improve performance. The first variant relies on a new sampling strategy that uses samples from the previous iteration to improve the estimation accuracy of the CVaR values. The second variant employs residual feedback that uses CVaR values from the previous iteration to reduce the variance of the CVaR gradient estimates. We theoretically analyze the convergence properties of these variants and illustrate their performance on an online market problem that we model as a Cournot game.
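The one-point zeroth-order estimator underlying the algorithm can be sketched generically: perturb the action along a random unit direction, evaluate the objective once, and scale. The paper applies this to estimated CVaR values; the construction below is the standard generic form, with illustrative names:

```python
import numpy as np

def one_point_zo_gradient(f, x, delta, rng):
    """One-point zeroth-order gradient estimate of f at x.

    Draws u uniformly from the unit sphere and returns
    (d / delta) * f(x + delta * u) * u, an unbiased estimate of the
    gradient of a smoothed version of f. Only a single function
    evaluation (one episode of bandit feedback) is required.
    """
    d = x.shape[0]
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)          # uniform direction on the unit sphere
    return (d / delta) * f(x + delta * u) * u

# Averaging many one-point estimates recovers the gradient (here of a
# linear function, whose smoothed gradient equals its true gradient).
rng = np.random.default_rng(0)
f = lambda x: 3.0 * x[0]            # true gradient is (3, 0)
x0 = np.zeros(2)
est = np.mean([one_point_zo_gradient(f, x0, 0.1, rng)
               for _ in range(40_000)], axis=0)
```

The high variance of single estimates is exactly what motivates the paper's residual-feedback variant, which reuses the previous iteration's CVaR value to cancel much of that variance.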
While "attention is all you need" may be proving true, we do not know why: attention-based transformer models such as BERT are superior, but how information flows from input tokens to output predictions remains unclear. We introduce influence patterns, abstractions of sets of paths through a transformer model. Patterns quantify and localize the flow of information to paths that pass through a sequence of model nodes. Experimentally, we find that a significant portion of the information flow in BERT goes through skip connections rather than attention heads. We further show that the consistency of patterns across instances is an indicator of BERT's performance. Finally, we demonstrate that patterns account for far more model performance than previous attention-based and layer-based methods.
We present Spider, a large-scale, complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 college students. It consists of 10,181 questions and 5,693 unique complex SQL queries on 200 databases with multiple tables, covering 138 different domains. We define a new complex and cross-domain semantic parsing and text-to-SQL task where different complex SQL queries and databases appear in train and test sets. In this way, the task requires the model to generalize well to both new SQL queries and new database schemas. Spider is distinct from most of the previous semantic parsing tasks because they all use a single database and the exact same programs in the train set and the test set. We experiment with various state-of-the-art models and the best model achieves only 12.4% exact matching accuracy on a database split setting. This shows that Spider presents a strong challenge for future research. Our dataset and task are publicly available at https://yale-lily.github.io/spider.
Existing 3D-aware image synthesis approaches mainly focus on generating a single canonical object and show limited capacity in composing a complex scene containing a variety of objects. This work presents DisCoScene: a 3D-aware generative model for high-quality and controllable scene synthesis. The key ingredient of our method is a very abstract object-level representation (i.e., 3D bounding boxes without semantic annotation) as the scene layout prior, which is simple to obtain, general to describe various scene contents, and yet informative to disentangle objects and background. Moreover, it serves as an intuitive user control for scene editing. Based on such a prior, the proposed model spatially disentangles the whole scene into object-centric generative radiance fields by learning on only 2D images with the global-local discrimination. Our model obtains the generation fidelity and editing flexibility of individual objects while being able to efficiently compose objects and the background into a complete scene. We demonstrate state-of-the-art performance on many scene datasets, including the challenging Waymo outdoor dataset. Project page: https://snap-research.github.io/discoscene/
Automated slicing aims to identify subsets of evaluation data where a trained model performs anomalously. This is an important problem for machine learning pipelines in production since it plays a key role in model debugging and comparison, as well as the diagnosis of fairness issues. Scalability has become a critical requirement for any automated slicing system due to the large search space of possible slices and the growing scale of data. We present Autoslicer, a scalable system that searches for problematic slices through distributed metric computation and hypothesis testing. We develop an efficient strategy that reduces the search space through pruning and prioritization. In the experiments, we show that our search strategy finds most of the anomalous slices by inspecting a small portion of the search space.
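The search strategy can be illustrated with a toy, single-machine sketch: enumerate one-feature slices, prune slices with too little support, and flag slices whose error rate fails a significance test against the overall rate. This is illustrative only; Autoslicer itself is distributed and searches a far larger space of slices:

```python
import math

def find_anomalous_slices(rows, labels, preds, min_size=30, z_threshold=3.0):
    """Toy slice finder over single-feature slices.

    rows: list of dicts mapping feature name -> value.
    Flags slices whose error rate deviates from the overall error rate
    by more than z_threshold standard errors.
    """
    n = len(rows)
    overall_err = sum(l != p for l, p in zip(labels, preds)) / n

    # Group example indices by (feature, value) slice.
    slices = {}
    for i, row in enumerate(rows):
        for feat, val in row.items():
            slices.setdefault((feat, val), []).append(i)

    anomalous = []
    for key, idx in slices.items():
        if len(idx) < min_size:           # pruning: skip low-support slices
            continue
        err = sum(labels[i] != preds[i] for i in idx) / len(idx)
        se = math.sqrt(overall_err * (1 - overall_err) / len(idx)) or 1e-12
        z = (err - overall_err) / se
        if abs(z) > z_threshold:          # hypothesis test vs. overall rate
            anomalous.append((key, err, z))
    return anomalous
```

The pruning step is what keeps the search tractable: slices too small to yield a statistically meaningful metric are skipped before any metric is computed.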